Syntactic Abstraction in Component Interfaces

Culpepper, Owens, Flatt. Syntactic Abstraction in Component Interfaces. GPCE 2005.

In this paper, we show how to combine a component system and a macro system. A component system separates the definition of a program fragment from the statements that link it, enabling independent compilation of the fragment. A macro system, in contrast, relies on explicit links among fragments that import macros, since macro expansion must happen at compile time. Our combination places macro definitions inside component signatures, thereby permitting macro expansion at compile time, while still allowing independent compilation and linking for the run-time part of components.

American lecture tour gets real

During the week of Nov. 7-11 I will be visiting five American universities and giving talks about our approach to teaching programming (as embodied by CTM) and about our research on distributed programming. The tour schedule and talk abstracts are available here. I will be happy to meet with people during the tour.

Zipper-based file server/OS

We present a file server/OS where threading and exceptions are all realized via delimited continuations. We use a zipper to navigate within any term. If the term in question is a finite map whose keys are strings and values are either strings or other finite maps, the zipper-based file system looks almost the same as the Unix file system. Unlike the latter, however, we offer: transactional semantics; undo of any file and directory operation; snapshots; a statically guaranteed isolation mode for clients (the strongest, repeatable read); a built-in traversal facility; and just the right behavior for cyclic directory references.
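
To make the picture concrete, here is a minimal, hand-rolled zipper over such a term in Haskell. This is not the ZFS code itself (which derives its zipper generically from a traversal using delimited continuations); the names Term, File, Dir, cd, up and put are purely illustrative:

    import qualified Data.Map as M

    -- A "file system term": a finite map whose values are files or directories.
    data Term = File String | Dir (M.Map String Term)

    -- One step of context: the name we entered and the rest of the parent directory.
    data Crumb = Crumb String (M.Map String Term)

    -- A zipper: the focused subterm plus the path back to the root.
    data Zipper = Zipper Term [Crumb]

    -- Enter a named entry of the focused directory (like `cd name`).
    cd :: String -> Zipper -> Maybe Zipper
    cd name (Zipper (Dir entries) crumbs) = do
      child <- M.lookup name entries
      return (Zipper child (Crumb name (M.delete name entries) : crumbs))
    cd _ _ = Nothing

    -- Go back to the parent directory (like `cd ..`).
    up :: Zipper -> Maybe Zipper
    up (Zipper t (Crumb name rest : crumbs)) =
      Just (Zipper (Dir (M.insert name t rest)) crumbs)
    up _ = Nothing

    -- Replace the focused subterm; earlier zipper values still hold the old
    -- version, which is what makes undo and snapshots cheap.
    put :: Term -> Zipper -> Zipper
    put new (Zipper _ crumbs) = Zipper new crumbs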



We can easily change our file server to support NFS, 9P, or another distributed file-system protocol. We can traverse richer terms than mere finite maps with string keys. In particular, we can use lambda-terms as our file system: one can cd into a lambda-term in bash.



The implementation of ZFS has no unsafe operations, no GHC (let alone Unix) threads, and no concurrency problems. Our threads cannot even do IO and cannot mutate any global state --- and the type system sees to it. We thus demonstrate how delimited continuations let us statically isolate effects even if the whole program eventually runs in an IO monad.
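
The threading idea can be sketched independently of ZFS. Below is a minimal cooperative scheduler built with shiftT from the transformers package (ZFS itself uses Kiselyov's own delimited-continuation library, so this is an approximation); thread bodies run in a Writer monad, so the type alone rules out IO and global mutation inside a thread:

    import Control.Monad.Trans.Cont   (ContT, evalContT, shiftT)
    import Control.Monad.Trans.Class  (lift)
    import Control.Monad.Trans.Writer (Writer, runWriter, tell)

    -- What a thread hands back to the scheduler: finished, or suspended
    -- with a captured continuation to resume later.
    data Status m = Done | Yielded (m (Status m))

    -- Give up the processor: capture the rest of the thread and return it.
    yield :: Monad m => ContT (Status m) m ()
    yield = shiftT (\k -> return (Yielded (k ())))

    -- Run a thread up to its first yield (or to completion).
    runThread :: Monad m => ContT (Status m) m () -> m (Status m)
    runThread t = evalContT (t >> return Done)

    -- A round-robin scheduler over suspended threads.
    schedule :: Monad m => [m (Status m)] -> m ()
    schedule []       = return ()
    schedule (t : ts) = do
      st <- t
      case st of
        Done      -> schedule ts
        Yielded k -> schedule (ts ++ [k])

    -- Threads can only log to the Writer monad; they cannot do IO.
    thread :: String -> ContT (Status (Writer [String])) (Writer [String]) ()
    thread name = do
      lift (tell [name ++ ": step 1"])
      yield
      lift (tell [name ++ ": step 2"])

    main :: IO ()
    main = mapM_ putStrLn . snd . runWriter $
      schedule [runThread (thread "A"), runThread (thread "B")]
      -- prints A: step 1, B: step 1, A: step 2, B: step 2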

zfs-talk: Expanded talk with the demo. It was originally presented as an extra demo at the Haskell Workshop 2005.

ZFS.tar.gz: The complete source code, with comments. It was tested with GHC 6.4 on FreeBSD and Linux.




Neat, a referentially transparent filesystem with transactional semantics in 540 lines of Haskell. I can think of some interesting uses for this.

XNHTML

Ning is a new free online service for building and using social applications.

The apps are built using PHP and a simple XML-vocabulary-based DSL called XNHTML.

As I've been saying for a long time here and elsewhere, it's all about programmability these days, and as the Ning folks realise, DSLs are a very good technique for achieving end-user programmability.

Seems to me they could have gone the extra mile and eliminated the need for PHP altogether, but I guess that would be asking for too much...

Control-Flow Integrity

Two papers about CFI.

Control-Flow Integrity: Principles, Implementations, and Applications:

Current software attacks often build on exploits that subvert machine-code execution. The enforcement of a basic safety property, Control-Flow Integrity (CFI), can prevent such attacks from arbitrarily controlling program behavior. CFI enforcement is simple, and its guarantees can be established formally, even with respect to powerful adversaries. Moreover, CFI enforcement is practical: it is compatible with existing software and can be done efficiently using software rewriting in commodity systems. Finally, CFI provides a useful foundation for enforcing further security policies, as we demonstrate with efficient software implementations of a protected shadow call stack and of access control for memory regions.

A Theory of Secure Control-Flow:

Control-Flow Integrity (CFI) means that the execution of a program dynamically follows only certain paths, in accordance with a static policy. CFI can prevent attacks that, by exploiting buffer overflows and other vulnerabilities, attempt to control program behavior. This paper develops the basic theory that underlies two practical techniques for CFI enforcement, with precise formulations of hypotheses and guarantees.

Not very language-oriented, I am afraid. However, this is related to PCC and TAL, which were discussed here in the past.

[Admin] Probation

After considering public and private complaints and suggestions from valued members of the LtU community, and after discussion between some of the site editors, we have created a new "probationary" category of site user. A user account will be placed in this category for repeated behavior that is not consistent with LtU Etiquette. Comments by these users will be reviewed by an editor before being approved for publication on the site, and any such comments that lack constructive content, or are unnecessarily inflammatory, etc. will not be approved.

Currently, only one account has been placed in this category, and there are no current plans to add any other accounts (although new accounts making suspicious posts will be scrutinized carefully). As Ehud put it, genuine members of the community, who are sincere and care about the quality of discussion on the site, need not be concerned that their status will be changed, even if they occasionally receive an [Admin] notice. Such notices will be posted when necessary, and are intended as a gentle reminder to everyone about the kind of comments we'd prefer to avoid.

Scoping based on control flow graph

The Anatomy of a Loop by Olin Shivers

…the lexical-scope rule of the classic λ-calculus, which we’ll call “LC scope,” implies an important property: it ensures that binders dominate references. The key design idea that serves as the foundation for our extensible loop form is to invert cause and effect, taking the desired property as our definition of scope. This gives us a new scoping rule, which we call “BDR scope.”

From this concept, Shivers builds a sub-language for describing iterations, which includes what is effectively a directed graph of scopes rather than a simple tree. Parallel code paths (or perhaps a better description would be unordered code paths) can independently contribute to the binding scope of following code, without being visible to each other. (See the example of quicksort on page 11.)

Is this too weird for real programmers? Or is it a stunningly elegant way of capturing something important about parallel code flows? I vote for the second, but I wonder what other LtUers think.

Combining computational effects

While some researchers seek to generalize monads (to arrows), others try to narrow the focus to achieve a richer theory (and probably deeper understanding).

Combining computational effects: commutativity and sum

We begin to develop a unified account of modularity for computational effects. We use the notion of enriched Lawvere theory, together with its relationship with strong monads, to reformulate Moggi's paradigm for modelling computational effects; we emphasise the importance here of the operations that induce computational effects. Effects qua theories are then combined by appropriate bifunctors (on the category of theories). We give a theory of the commutative combination of effects, which in particular yields Moggi's side-effects monad transformer (an application is the combination of side-effects with nondeterminism). And we give a theory for the sum of computational effects, which in particular yields Moggi's exceptions monad transformer (an application is the combination of exceptions with other effects).
A longer version: Combining Effects: Sum and Tensor
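
The two monad-transformer instances named in the abstract are easy to see in Haskell, even though the paper itself works at the level of enriched Lawvere theories. Here is a minimal sketch using the transformers library (the function names are illustrative, not from the paper): StateT over the list monad for side-effects combined with nondeterminism, and ExceptT for exceptions summed with other effects:

    import Control.Monad.Trans.State  (StateT, runStateT, get, put)
    import Control.Monad.Trans.Except (ExceptT, runExceptT, throwE)
    import Control.Monad.Trans.Class  (lift)

    -- "Commutative combination": Moggi's side-effects (state) transformer
    -- layered over nondeterminism (the list monad).
    tick :: StateT Int [] Int
    tick = do { n <- get; put (n + 1); return n }

    branch :: StateT Int [] Bool
    branch = lift [True, False]        -- nondeterministic choice

    counted :: StateT Int [] (Bool, Int)
    counted = do
      b <- branch
      n <- tick                        -- each branch threads its own counter
      return (b, n)

    -- "Sum of effects": Moggi's exceptions transformer over an arbitrary
    -- other effect, here over the state-plus-nondeterminism stack above.
    safeDiv :: Monad m => Int -> Int -> ExceptT String m Int
    safeDiv _ 0 = throwE "division by zero"
    safeDiv x y = return (x `div` y)

    main :: IO ()
    main = do
      print (runStateT counted 0)
      -- [((True,0),1),((False,0),1)]
      print (runStateT (runExceptT (safeDiv 10 0)) 0)
      -- [(Left "division by zero",0)]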

MIT CADR Source Snapshot Released

Via Lemonodor. More details can be found at Bill Clementson's Blog.

Link to files

Number-Parameterized Types by Oleg Kiselyov


from the abstract:

This paper describes practical programming with types parameterized by numbers: e.g., an array type parameterized by the array's size or a modular group type Zn parameterized by the modulus. An attempt to add, for example, two integers of different moduli should result in a compile-time error with a clear error message. Number-parameterized types let the programmer capture more invariants through types and eliminate some run-time checks.

Oleg shows several approaches to encoding numbers in types and using those numbers to check list lengths and matching sizes for matrices or vectors. Oleg also points out connections to dependent types, phantom types, and shape-invariant programming.
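
As a small illustration of the idea in today's GHC (using DataKinds and GADTs rather than the paper's own decimal encoding of type-level numbers), here is a list type indexed by its length, where a size mismatch is a compile-time error rather than a run-time check:

    {-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

    -- Type-level natural numbers (the simplest possible encoding).
    data Nat = Z | S Nat

    -- A list indexed by its length.
    data Vec (n :: Nat) a where
      Nil  :: Vec 'Z a
      Cons :: a -> Vec n a -> Vec ('S n) a

    -- Zipping requires both vectors to have the same length; passing
    -- vectors of different sizes is rejected by the type checker.
    vzip :: Vec n a -> Vec n b -> Vec n (a, b)
    vzip Nil         Nil         = Nil
    vzip (Cons x xs) (Cons y ys) = Cons (x, y) (vzip xs ys)

    -- Head is only defined on provably non-empty vectors.
    vhead :: Vec ('S n) a -> a
    vhead (Cons x _) = x

    main :: IO ()
    main = print (vhead (vzip (Cons 1 (Cons 2 Nil)) (Cons 'a' (Cons 'b' Nil))))
    -- prints (1,'a')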